
    How do field of view and resolution affect the information content of panoramic scenes for visual navigation? A computational investigation

    The visual systems of animals have to provide information to guide behaviour, and the informational requirements of an animal's behavioural repertoire are often reflected in its sensory system. For insects, this is often evident in the optical array of the compound eye. One behaviour that insects share with many animals is the use of learnt visual information for navigation. As ants are expert visual navigators, their vision may be optimised for navigation. Here we take a computational approach, asking how the details of the optical array influence the informational content of scenes used in simple view-matching strategies for orientation. We find that robust orientation is best achieved with low-resolution visual information and a large field of view, similar to the optical properties seen in many ant species. A lower resolution allows for a trade-off between specificity and generalisation for stored views. Additionally, our simulations show that orientation performance increases if different portions of the visual field are treated as discrete visual sensors, each giving an independent directional estimate. This suggests that ants might benefit from processing information from their two eyes independently.
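    The view-matching idea in this abstract can be made concrete with a rotational image difference function (RIDF): the current panoramic view is compared with a stored view at every horizontal rotation, and the rotation with the smallest difference gives a directional estimate. A minimal sketch (not the authors' code; panorama sizes and the RMS metric are illustrative assumptions):

```python
import numpy as np

def ridf(current, stored):
    """Root-mean-square pixel difference between a stored panoramic
    view and the current view, for every horizontal rotation."""
    h, w = stored.shape
    return np.array([
        np.sqrt(np.mean((np.roll(current, shift, axis=1) - stored) ** 2))
        for shift in range(w)
    ])

def best_heading(current, stored):
    """Rotation (in columns) that minimises the image difference."""
    return int(np.argmin(ridf(current, stored)))

# Toy example: the 'current' view is the stored view rotated by 5 columns.
rng = np.random.default_rng(0)
stored = rng.random((10, 36))           # 10 x 36 low-resolution panorama
current = np.roll(stored, -5, axis=1)   # agent has turned away by 5 columns
print(best_heading(current, stored))    # -> 5: rotating back recovers the match
```

    Lowering the panorama resolution in this scheme widens the basin of attraction around the minimum, which is one way to read the specificity/generalisation trade-off described above.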

    Ground-nesting insects could use visual tracking for monitoring nest position during learning flights

    Ants, bees and wasps are central-place foragers: they leave their nests to forage and routinely return to their home base. Most are guided by memories of the visual panorama and of the visual appearance of the local nest environment when pinpointing their nest. These memories are acquired during highly structured learning walks or flights, performed when the insects leave the nest for the first time or whenever they had difficulty finding the nest on their previous return. Ground-nesting bees and wasps perform such learning flights daily when they first depart. During these flights, the insects turn back to face the nest entrance and then back away from the nest along ever-increasing arcs centred on it. Flying along these arcs, the insects counter-turn so that the nest entrance is always seen in the frontal visual field at slightly lateral positions. Here we asked how the insects manage to keep track of the nest entrance location, given that it is a small, inconspicuous hole in the ground surrounded by complex natural structures that undergo unpredictable perspective transformations as the insect pivots around the area and gains distance from it. We reconstructed the natural visual scene experienced by wasps and bees during their learning flights and applied a number of template-based tracking methods to these image sequences. We find that tracking with a fixed template fails very quickly in the course of a learning flight, but that continuously updating the template allowed us to reliably estimate nest direction in reconstructed image sequences. This is true even for later sections of learning flights, when the insects are so far from the nest that they cannot resolve the nest entrance as a visual feature. We discuss why visual goal-anchoring is likely to be important during the acquisition of visual-spatial memories and describe experiments to test whether insects indeed update nest-related templates during their learning flights. © 2014 Springer International Publishing Switzerland
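    The fixed-versus-updated template comparison can be sketched with the simplest template tracker: exhaustive sum-of-squared-differences search, with an option to replace the template by the best-matching patch after every frame. This is an illustrative stand-in, not the specific tracking methods tested in the paper; frame sizes and the toy patch sequence are assumptions:

```python
import numpy as np

def locate(frame, template):
    """Return (row, col) of the patch in `frame` that best matches
    `template`, by exhaustive sum-of-squared-differences search."""
    th, tw = template.shape
    fh, fw = frame.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(fh - th + 1):
        for c in range(fw - tw + 1):
            d = np.sum((frame[r:r+th, c:c+tw] - template) ** 2)
            if d < best:
                best, best_pos = d, (r, c)
    return best_pos

def track(frames, template, update=True):
    """Track a template through a frame sequence; optionally replace
    the template with the best-matching patch after each frame."""
    positions = []
    for frame in frames:
        r, c = locate(frame, template)
        positions.append((r, c))
        if update:  # continuously updated template
            template = frame[r:r+template.shape[0], c:c+template.shape[1]].copy()
    return positions

# Toy sequence: a distinctive 4x4 patch drifts across otherwise empty frames.
rng = np.random.default_rng(0)
patch = rng.random((4, 4)) + 1.0
positions_true = [(2, 2), (3, 4), (4, 6)]
frames = []
for r, c in positions_true:
    frame = np.zeros((20, 20))
    frame[r:r+4, c:c+4] = patch
    frames.append(frame)

print(track(frames, patch))   # recovers the true positions
```

    In natural learning-flight imagery the appearance of the nest surround changes with perspective, which is why a fixed template drifts off target while an updated one can follow the gradual transformation.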

    A model of ant route navigation driven by scene familiarity

    In this paper we propose a model of visually guided route navigation in ants that captures the known properties of real behaviour whilst retaining mechanistic simplicity and thus biological plausibility. For an ant, the coupling of movement and viewing direction means that a familiar view specifies a familiar direction of movement. Since the views experienced along a habitual route will be more familiar, route navigation can be re-cast as a search for familiar views. This search can be performed with a simple scanning routine, a behaviour that ants have been observed to perform. We test this proposed route navigation strategy in simulation, by learning a series of routes through visually cluttered environments consisting of objects that are only distinguishable as silhouettes against the sky. In the first instance we determine view familiarity by exhaustive comparison with the set of views experienced during training. In further experiments we train an artificial neural network to perform familiarity discrimination using the training views. Our results indicate not only that the approach is successful, but also that the learnt routes show many of the characteristics of desert ant routes. As such, we believe the model represents the only detailed and complete model of insect route guidance to date. What is more, the model provides a general demonstration that visually guided routes can be produced with parsimonious mechanisms that neither specify when or what to learn, nor separate routes into sequences of waypoints.
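    The first model variant, exhaustive comparison with the training views, combined with the scanning routine can be sketched as follows. This is a simplified illustration under assumed panorama sizes, not the authors' implementation:

```python
import numpy as np

def familiarity(view, memory_bank):
    """Familiarity = negative distance to the closest stored view
    (exhaustive comparison with the set of training views)."""
    return -min(np.sum((view - m) ** 2) for m in memory_bank)

def choose_heading(panorama, memory_bank):
    """Scanning routine: evaluate the view at every candidate heading
    and move in the direction that looks most familiar."""
    w = panorama.shape[1]
    scores = [familiarity(np.roll(panorama, -s, axis=1), memory_bank)
              for s in range(w)]
    return int(np.argmax(scores))

# Toy example: the agent has rotated 7 columns away from the trained heading.
rng = np.random.default_rng(0)
stored = rng.random((8, 36))                 # view memorised during training
panorama = np.roll(stored, 7, axis=1)        # current (rotated) view
print(choose_heading(panorama, [stored]))    # -> 7: scanning recovers the offset
```

    The neural-network variant described above replaces the explicit memory bank with a learned familiarity function, so storage no longer grows with route length.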

    Electroluminescence from chirality-sorted (9,7)-semiconducting carbon nanotube devices

    We have measured the electroluminescence and photoluminescence of (9,7) semiconducting carbon nanotube devices and demonstrate that the electroluminescence wavelength is determined by the nanotube's chiral index (n,m). The devices were fabricated on Si3N4 membranes by dielectrophoretic assembly of tubes from a monochiral dispersion. Electrically driven (9,7) devices exhibit a single Lorentzian-shaped emission peak at 825 nm in the visible part of the spectrum. The emission could be assigned to the excitonic E22 interband transition by comparing the electroluminescence spectra with corresponding photoluminescence excitation maps. We show a linear dependence of the EL peak width on the electrical current and provide evidence for the inertness of Si3N4 surfaces with respect to the nanotubes' optical properties. (Comment: 6 pages, 3 figures, submitted to Optics Express)

    PU.1 controls fibroblast polarization and tissue fibrosis

    Fibroblasts are polymorphic cells with pleiotropic roles in organ morphogenesis, tissue homeostasis and immune responses. In fibrotic diseases, fibroblasts synthesize abundant amounts of extracellular matrix, which induces scarring and organ failure. By contrast, a hallmark feature of fibroblasts in arthritis is degradation of the extracellular matrix, because of the release of metalloproteinases and degrading enzymes, and subsequent tissue destruction. The mechanisms that drive these functionally opposing pro-fibrotic and pro-inflammatory phenotypes of fibroblasts remain unknown. Here we identify the transcription factor PU.1 as an essential regulator of the pro-fibrotic gene expression program. The interplay between transcriptional and post-transcriptional mechanisms that normally controls the expression of PU.1 is perturbed in various fibrotic diseases, resulting in upregulation of PU.1, induction of fibrosis-associated gene sets and a phenotypic switch towards extracellular-matrix-producing, pro-fibrotic fibroblasts. By contrast, pharmacological and genetic inactivation of PU.1 disrupts the fibrotic network and enables reprogramming of fibrotic fibroblasts into resting fibroblasts, leading to regression of fibrosis in several organs.

    Using deep autoencoders to investigate image matching in visual navigation

    This paper discusses the use of deep autoencoder networks to find a compressed representation of an image that can be used for visual navigation. Images reconstructed from the compressed representation are tested to see whether they retain enough information to be used as a visual compass (in which one image is matched against another to recall a bearing or movement direction), as this ability is at the heart of a visual route navigation algorithm. We show that both reconstructed images and compressed representations from different layers of the autoencoder can be used in this way, suggesting that a compact image code is sufficient for visual navigation and that deep networks hold promise for finding optimal visual encodings for this task.
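    The compress-reconstruct-compare pipeline can be illustrated with a linear "autoencoder": the optimal linear encoder/decoder pair is given by the top principal components of the training views, so PCA serves as a minimal stand-in for a trained deep network. All sizes and data here are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)
panorama = rng.random((10, 36))                      # toy panoramic scene
views = np.stack([np.roll(panorama, s, axis=1).ravel()
                  for s in range(36)])               # one training view per heading

# Linear "autoencoder": encoder/decoder from the principal components
# of the training views (the optimum a linear autoencoder converges to).
mean = views.mean(axis=0)
_, _, vt = np.linalg.svd(views - mean, full_matrices=False)
k = 35                                               # code size: 360 pixels -> 35 numbers
encode = lambda x: (x - mean) @ vt[:k].T             # image -> compressed code
decode = lambda z: z @ vt[:k] + mean                 # code -> reconstructed image

def compass(current, stored):
    """Visual compass run on reconstructions: the rotation minimising
    the pixel difference between the two reconstructed views."""
    cur = decode(encode(current.ravel())).reshape(current.shape)
    sto = decode(encode(stored.ravel())).reshape(stored.shape)
    diffs = [np.sum((np.roll(cur, -s, axis=1) - sto) ** 2) for s in range(36)]
    return int(np.argmin(diffs))

print(compass(np.roll(panorama, 9, axis=1), panorama))   # -> 9
```

    The point of the test is the same as in the paper: if the compass still recovers the correct bearing from reconstructions, the compact code has retained the navigation-relevant information.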

    Modelling human visual navigation using multi-view scene reconstruction

    It is often assumed that humans generate a 3D reconstruction of the environment, either in egocentric or world-based coordinates, but the steps involved are unknown. Here, we propose two reconstruction-based models, evaluated using data from two tasks in immersive virtual reality. We model the observer's prediction of landmark location based on standard photogrammetric methods and then combine location predictions to compute likelihood maps of navigation behaviour. In one model, each scene point is treated independently in the reconstruction; in the other, the pertinent variable is the spatial relationship between pairs of points. Participants viewed a simple environment from one location, were transported (virtually) to another part of the scene and were asked to navigate back. Error distributions varied substantially with changes in scene layout; we compared these directly with the likelihood maps to quantify the success of the models. We also measured error distributions when participants manipulated the location of a landmark to match the preceding interval, providing a direct test of the landmark-location stage of the navigation models. Models such as these, which start with scenes and end with a probabilistic prediction of behaviour, are likely to be increasingly useful for understanding 3D vision.
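    The landmark-location stage rests on a standard photogrammetric primitive: triangulating a point from bearings taken at two known viewpoints. A minimal 2-D sketch (an illustration of the principle, not the authors' model; coordinates are assumed):

```python
import numpy as np

def triangulate(p1, bearing1, p2, bearing2):
    """Intersect two bearing rays (angles in radians, in a shared world
    frame) to estimate a landmark position in the plane."""
    d1 = np.array([np.cos(bearing1), np.sin(bearing1)])
    d2 = np.array([np.cos(bearing2), np.sin(bearing2)])
    a = np.column_stack([d1, -d2])           # solve p1 + t1*d1 = p2 + t2*d2
    t = np.linalg.solve(a, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * d1

# A landmark at (3, 4) seen from viewpoints (0, 0) and (6, 0):
est = triangulate((0, 0), np.arctan2(4, 3), (6, 0), np.arctan2(4, -3))
print(np.round(est, 6))                      # -> [3. 4.]
```

    With noisy bearings the intersection becomes a distribution over positions, which is what the likelihood-map stage of the models above aggregates across landmarks.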

    Using an insect mushroom body circuit to encode route memory in complex natural environments

    Ants, like many other animals, use visual memory to follow extended routes through complex environments, but it is unknown how their small brains implement this capability. The mushroom body neuropils have been identified as a crucial memory circuit in the insect brain, but their function has mostly been explored in simple olfactory association tasks. We show that a spiking neural model of this circuit, originally developed to describe fruit fly (Drosophila melanogaster) olfactory association, can also account for the ability of desert ants (Cataglyphis velox) to rapidly learn visual routes through complex natural environments. We further demonstrate that abstracting the key computational principles of this circuit, which include one-shot learning of sparse codes, allows the theoretical storage capacity of the ant mushroom body to be estimated at hundreds of independent images.
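    The abstracted principles named above, sparse expansion followed by one-shot synaptic depression, can be sketched with a toy mushroom-body-style familiarity detector: inputs are randomly expanded onto many "Kenyon cells", only the most strongly driven fraction fires, and learning silences the output synapses of active cells so that familiar images evoke a low output. Layer sizes and the winner-take-all rule are illustrative assumptions, not the paper's spiking model:

```python
import numpy as np

rng = np.random.default_rng(2)
N_INPUT, N_KC, SPARSITY = 100, 2000, 0.05    # toy sizes, not anatomical counts

proj = (rng.random((N_KC, N_INPUT)) < 0.1).astype(float)  # random input->KC wiring
w_out = np.ones(N_KC)                        # KC->output synaptic weights

def kc_code(image):
    """Sparse Kenyon-cell code: only the most strongly driven
    fraction of cells fires (winner-take-all sparsification)."""
    act = proj @ image
    k = int(SPARSITY * N_KC)
    code = np.zeros(N_KC)
    code[np.argsort(act)[-k:]] = 1.0
    return code

def learn(image):
    """One-shot learning: depress output synapses of active KCs."""
    w_out[kc_code(image) > 0] = 0.0

def novelty(image):
    """Output response: low for familiar (learned) images."""
    return w_out @ kc_code(image)

familiar = rng.random(N_INPUT)
novel = rng.random(N_INPUT)
learn(familiar)
print(novelty(familiar), novelty(novel))     # familiar: 0.0; novel: positive
```

    Because each image silences only a small fraction of the expanded layer, many images can be stored before their sparse codes collide, which is the intuition behind the hundreds-of-images capacity estimate.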

    One Step Nucleic Acid Amplification (OSNA) - a new method for lymph node staging in colorectal carcinomas

    Background: Accurate histopathological evaluation of resected lymph nodes (LN) is essential for reliable staging of colorectal carcinomas (CRC). With conventional sectioning and staining techniques, usually only parts of each LN are examined, which may lead to incorrect tumour staging. A molecular method called OSNA (One Step Nucleic Acid Amplification) may be suitable for determining the metastatic status of the complete LN and therefore improve staging.
    Methods: OSNA is based on a short homogenisation step and subsequent automated amplification of cytokeratin 19 (CK19) mRNA directly from the sample lysate, with results available in 30-40 minutes. In this study, 184 frozen LN from 184 patients with CRC were investigated by both OSNA and histology (Haematoxylin & Eosin staining and CK19 immunohistochemistry), with half of each LN used for each method. Samples with discordant results were further analysed by RT-PCR for CK19 and carcinoembryonic antigen (CEA).
    Results: The concordance rate between histology and OSNA was 95.7%. Three LN were histology-positive/OSNA-negative and 5 LN histology-negative/OSNA-positive. RT-PCR supported the OSNA result in 3 discordant cases, suggesting that metastases were exclusively located in either the tissue analysed by OSNA or the tissue used for histology. If these samples were excluded, the concordance was 97.2%, the sensitivity 94.9% and the specificity 97.9%. Three patients (3%) staged as UICC I or II by routine histopathology were upstaged as LN-positive by OSNA. One of these patients developed distant metastases during follow-up.
    Conclusion: OSNA is a new and reliable method for molecular staging of lymphatic metastases in CRC and enables the examination of whole LN. It can be applied as a rapid diagnostic tool to estimate tumour involvement in LN during the staging of CRC.
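    The reported concordance figures follow directly from the stated counts (184 nodes, 3 histology-positive/OSNA-negative and 5 histology-negative/OSNA-positive discordant samples, 3 of which RT-PCR attributed to tissue allocation). A quick arithmetic check:

```python
def concordance(total, discordant):
    """Percentage of samples on which OSNA and histology agree."""
    return 100 * (total - discordant) / total

# 184 lymph nodes, 3 + 5 = 8 discordant samples overall:
print(round(concordance(184, 8), 1))        # -> 95.7, as reported
# Excluding the 3 discordant samples supported by RT-PCR:
print(round(concordance(184 - 3, 8 - 3), 1))  # -> 97.2, as reported
```

    The sensitivity (94.9%) and specificity (97.9%) additionally require the split of concordant nodes into positives and negatives, which the abstract does not give, so they are not recomputed here.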